Learning task-oriented grasping for tool manipulation from simulated self-supervision
Authors
Abstract
Related Papers
Task-Level Object Grasping for Simulated Agents
Simulating a human figure performing a manual task requires that the agent interact with objects in the environment in a realistic manner. Graphic or programming interfaces to control human figure animation, however, do not allow the animator to instruct the system with concise "high-level" commands. Instructions coming from a high-level planner cannot be directly given to a synthetic agent b...
Self-Supervision for Reinforcement Learning
Reinforcement learning optimizes policies for expected cumulative reward. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, making it a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquit...
Learning Human Priors for Task-Constrained Grasping
An autonomous agent using man-made objects must understand how the task conditions the grasp placement. In this paper we formulate task-based robotic grasping as a feature learning problem. Using a human demonstrator to provide examples of grasps associated with a specific task, we learn a representation such that similarity in task is reflected by similarity in feature. The learned representation ...
A Self-learning Controller For Monocular Grasping
A method is presented to learn 3D grasping of objects with unknown dimensions using a monocular eye-in-hand manipulator. From a sequence of images a motion profile is generated to approach the object of unknown size. It is shown that monocular visual information suffices to control the deceleration of the robot manipulator. A strategy for generating learning samples is presented, and simulation r...
Semantic and Geometric Scene Understanding for Task-oriented Grasping of Novel Objects from a Single View
We present a task-oriented grasp model, that learns grasps that are configurationally compatible with a given task. The model consists of a geometric grasp model, and a semantic grasp model. The geometric model relies on a dictionary of grasp prototypes that are learned from experience, while the semantic model is CNN-based and identifies scene regions that are compatible with a specific task. ...
Journal
Journal title: The International Journal of Robotics Research
Year: 2019
ISSN: 0278-3649, 1741-3176
DOI: 10.1177/0278364919872545